AI accountability
What Stanford's recent AI conference reveals about the state of AI accountability
As AI adoption continues to ramp up exponentially, so has the discussion around -- and concern for -- accountable AI. While tech leaders and field researchers understand the importance of developing AI that is ethical, safe and inclusive, they still grapple with issues around regulatory frameworks and concepts of "ethics washing" or "ethics shirking" that diminish accountability. Perhaps most importantly, the concept is not yet clearly defined. While many sets of suggested guidelines and tools exist -- from the U.S. National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework to the European Commission's Expert Group on AI, for example -- they are not cohesive and are very often vague and overly complex.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
AI accountability: Who's responsible when AI goes wrong?
AI systems sometimes run amok. One chatbot, designed by Microsoft to mimic a teenager, began spewing racist hate speech within hours of its release online. Microsoft immediately took the bot down. Another system, which Amazon designed to help its recruiting efforts but ultimately didn't release, inadvertently discriminated against female applicants. Other so-called "smart" systems have led to false arrests, biased bail amounts for criminal defendants, and even fatal car crashes. Experts expect to see more cases of problematic AI as organizations increasingly implement intelligent technology, sometimes without the proper governance in place.
- Law (1.00)
- Government (0.97)
AI Accountability: Proceed at Your Own Risk - InformationWeek
A report issued by technology research firm Forrester, AI Aspirants: Caveat Emptor, highlights the growing need for third-party accountability in artificial intelligence tools. The report found that a lack of accountability in AI can result in regulatory fines, brand damage, and lost customers, all of which can be avoided by performing third-party due diligence and adhering to emerging best practices for responsible AI development and deployment. The risks of getting AI wrong are real and, unfortunately, they're not always directly within the enterprise's control, the report observed. "Risk assessment in the AI context is complicated by a vast supply chain of components with potentially nonlinear and untraceable effects on the output of the AI system," it stated. Most enterprises partner with third parties to create and deploy AI systems because they don't have the necessary technology and skills in house to perform these tasks on their own, said report author Brandon Purcell, a Forrester principal analyst who covers customer analytics and artificial intelligence issues.
AI Transparency: Let's Talk About AI Accountability
In recent years, academics and corporate professionals have requested greater transparency into the inner workings of artificial intelligence (AI) models, and for many good reasons. In a Harvard Business Review post, Andrew Burt, Immuta's chief legal officer, points out that transparency can help mitigate problems such as unfairness, discrimination and distrust: Apple's new credit card, for example, has been accused of relying on sexist lending models, while Amazon scrapped an AI hiring tool after discovering it discriminated against women. At the same time, it is becoming clear that the disclosure of AI information poses its own risks: greater disclosure can make AI more vulnerable to attack, and the more information companies report, the more susceptible they can be to lawsuits or regulatory actions. "Let's call it the AI transparency paradox: while generating more information about AI could bring real benefits, it could also create new risks. To navigate this paradox, organizations will need to think carefully about how they handle AI risks, the information they generate about these risks, and how that information is shared and protected," says Burt.
- North America > United States > California > Orange County > Irvine (0.05)
- North America > United States > California > Alameda County > Berkeley (0.05)
Ethical AI Can Ensure AI's Superpower Is Used For Good
You don't have to go far or think hard to come up with scenarios -- real or imagined -- in which new technology can go wrong. Artificial intelligence (AI) has played a starring role in many such stories. But it's important to remember why humans created AI in the first place. When used correctly and ethically, AI can be an amazing tool for good. AI allows for more efficient use of resources, increased productivity and better customer experiences.
We Need to Tell Better Stories About Our AI Future - Motherboard
Discussions about the ethics, safety, and societal impact of artificial intelligence seem to come back to the same cultural touch points found in AI stories that warn of worst-case scenarios. Whether in press coverage or in policy position papers, we keep going back to the same stories. We need to tell more diverse and realistic stories about AI if we want to understand how these technologies fit into our society today, and in the future. In Kubrick's 2001: A Space Odyssey, HAL 9000, the calm assistant, shuts Dave out of the ship's systems. In The Terminator, the AI defense system Skynet becomes self-aware and initiates a nuclear holocaust to decimate the human race.
UK tech committee: It's time to lay down the law on AI accountability
A UK parliamentary committee has appealed to the UK government to take action and begin seriously considering "a host of social, ethical and legal questions" that are increasingly pertinent thanks to the rise of artificial intelligence. The science and technology committee started its inquiry in March 2016, visiting Google's DeepMind office, gathering 67 written statements, and interviewing 12 witnesses in person in order to establish the most urgent issues. In its newly published report, the committee concludes that "while it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now." The biggest reason for this is the need to ensure that the UK is building socially beneficial AI systems, and one of the best ways to make this happen is to start a wider public dialogue on the issue. There are three main issues that the committee flags up as requiring "serious" consideration: minimizing bias being accidentally built into AI systems; ensuring that the decisions they make are transparent; and establishing ways to verify that AI systems are operating as intended and won't behave unpredictably. In these early stages, the committee advises in its report that the government create a standing Commission on Artificial Intelligence with a broad membership that is able to provide a wide range of expertise.
- Law (1.00)
- Government (1.00)
- Information Technology > Security & Privacy (0.73)
AI accountability needs action now, say UK MPs
A UK parliamentary committee has urged the government to act proactively -- and to act now -- to tackle "a host of social, ethical and legal questions" arising from growing usage of autonomous technologies such as artificial intelligence. "While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now," says the committee. "Not only would this help to ensure that the UK remains focused on developing 'socially beneficial' AI systems, it would also represent an important step towards fostering public dialogue about, and trust in, such systems over time." The committee kicked off an enquiry into AI and robotics this March, going on to take 67 written submissions and hear from 12 witnesses in person, in addition to visiting Google DeepMind's London office. Publishing its report into robotics and AI today, the Science and Technology committee flags up several issues that it says need "serious, ongoing consideration." "[W]itnesses were clear that the ethical and legal matters raised by AI deserved attention now and that suitable governance frameworks were needed," it notes in the report.
- Law (1.00)
- Government (1.00)
- Information Technology > Security & Privacy (0.97)